This paper presents work on restoring punctuation in transcripts generated by multilingual ASR systems. The focus languages are English, Mandarin, and Malay, three of the most widely spoken languages in Singapore. To the best of our knowledge, this is the first system that tackles punctuation restoration for these three languages simultaneously. Traditional approaches usually treat the task as sequence labeling; this work instead adopts a slot-filling approach that predicts the presence and type of punctuation mark at each word boundary. The approach is similar to the masked-language-model objective used during BERT pre-training, but instead of predicting masked words, our model predicts masked punctuation. Additionally, we find that using Jieba instead of relying only on the built-in SentencePiece tokenizer of XLM-R significantly improves performance when punctuating Mandarin transcripts. Experimental results on the English and Mandarin IWSLT2022 datasets and on Malay news show that the proposed approach achieves state-of-the-art results for Mandarin with a 73.8% F1-score, while maintaining reasonable F1-scores for English and Malay, i.e. 74.7% and 78% respectively. Our source code, which allows reproducing the results and building a simple web-based demonstration application, is available on GitHub.
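The slot-filling formulation can be sketched as follows; the `to_slots`/`restore` helpers and the toy predictor are illustrative stand-ins, not the paper's XLM-R model:

```python
# Minimal sketch of the slot-filling formulation: each word boundary becomes
# a mask slot, and a classifier predicts a punctuation class for that slot,
# analogous to BERT's masked-LM objective but over punctuation, not words.

PUNCT_CLASSES = ["", ",", ".", "?"]  # "" = no punctuation at this boundary

def to_slots(words):
    """Interleave a <mask> punctuation slot after every word."""
    slotted = []
    for w in words:
        slotted.extend([w, "<mask>"])
    return slotted

def restore(words, predict):
    """Fill each boundary slot with the class chosen by `predict`."""
    return " ".join(w + predict(words, i) for i, w in enumerate(words))

# Toy predictor: a period at the final boundary only (a real model would
# score every class in PUNCT_CLASSES at every slot).
toy = lambda ws, i: "." if i == len(ws) - 1 else ""
```

A real system would feed the slotted sequence through the encoder and classify each `<mask>` position.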
Audio-visual speech recognition (AVSR) has achieved remarkable success in improving the noise robustness of speech recognition. Mainstream methods focus on fusing audio and visual inputs to obtain modality-invariant representations. However, such representations are prone to over-reliance on the audio modality, as it is much easier to recognize than the video modality in clean conditions. As a result, the AVSR model underestimates the importance of the visual stream in the face of noise corruption. To this end, we leverage visual modality-specific representations to provide stable complementary information for the AVSR task. Specifically, we propose a reinforcement learning (RL) based framework called MSRL, in which the agent dynamically harmonizes modality-invariant and modality-specific representations during the auto-regressive decoding process. We customize a reward function directly related to the task-specific metric (i.e., word error rate), which encourages MSRL to effectively explore the optimal integration strategy. Experimental results on the LRS3 dataset show that the proposed method achieves state-of-the-art performance in both clean and various noisy conditions. Furthermore, we demonstrate the better generality of the MSRL system compared with other baselines when the test set contains unseen noises.
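The task-metric reward can be sketched as follows. The edit-distance WER computation is standard; using negative WER directly as the reward signal is an illustrative assumption, not necessarily the paper's exact reward design:

```python
# Sketch of a WER-tied reward: the RL agent is rewarded by (negative) word
# error rate, so lower WER means higher reward.

def wer(ref, hyp):
    """Word error rate via word-level Levenshtein distance."""
    r, h = ref.split(), hyp.split()
    # DP table: d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def reward(ref, hyp):
    return -wer(ref, hyp)  # higher (less negative) is better
```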
The role of quantization in implicit/coordinate neural networks is still not fully understood. We note that, because the distribution of the network weights changes over the course of training, using a canonical fixed quantization scheme during training leads to poor performance at low bit rates. In this work, we show that non-uniform quantization of the network weights leads to significant improvement. Specifically, we demonstrate that clustered quantization improves reconstruction. Finally, by characterizing the trade-off between quantization and network capacity, we show that reconstructing signals with binary neural networks is possible (though memory-inefficient). We demonstrate our findings experimentally on 2D image reconstruction and 3D radiance fields, and show that simple quantization methods together with architecture search can compress NeRF to less than 16KB with minimal performance loss (323x smaller than the original NeRF).
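Clustered quantization can be sketched as 1-D k-means over the weights, with each weight replaced by its nearest learned centroid. The initialization and iteration count below are illustrative assumptions:

```python
import numpy as np

# Sketch of clustered (non-uniform) weight quantization: run 1-D k-means on
# the flattened weights, then snap every weight to its cluster centroid.

def kmeans_quantize(w, k=4, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    c = rng.choice(w, size=k, replace=False).astype(float)  # init centroids
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - c[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):          # skip empty clusters
                c[j] = w[assign == j].mean()
    assign = np.argmin(np.abs(w[:, None] - c[None, :]), axis=1)
    return c[assign], c  # quantized weights and the learned codebook

w = np.array([-1.0, -0.9, 0.0, 0.1, 0.9, 1.1])
q, codebook = kmeans_quantize(w, k=3)
```

Only the codebook and the per-weight cluster indices need to be stored, which is where the compression comes from.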
Continually learning new classes without catastrophic forgetting is a challenging problem given constraints on computational resources (e.g., model size, running memory). To address this problem, we propose a simple and effective continual learning method. Our method selects historical data for training by measuring per-sample classification uncertainty. Specifically, we measure uncertainty by observing how the classification probability of a sample fluctuates under parallel perturbations added to the classifier embedding. In this way, the computational cost can be significantly reduced compared with adding perturbations to the raw data. Experimental results on the DCASE 2019 Task 1 and ESC-50 datasets show that our proposed method outperforms baseline continual learning methods in classification accuracy and computational efficiency, indicating that our method can efficiently and incrementally learn new classes without the catastrophic forgetting problem for environmental sound classification.
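The perturbation-based uncertainty measure can be sketched as follows; the linear classifier and Gaussian perturbation scale are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

# Sketch of the uncertainty measure: add small parallel perturbations to a
# sample's classifier embedding (not to the raw input) and score uncertainty
# as the fluctuation of the resulting class probabilities.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def uncertainty(embed, W, n_perturb=8, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    base = softmax(W @ embed)
    probs = np.stack([
        softmax(W @ (embed + rng.normal(0.0, scale, size=embed.shape)))
        for _ in range(n_perturb)
    ])
    # mean fluctuation of predicted probabilities across perturbed copies
    return float(np.abs(probs - base).mean())

# Samples with the highest fluctuation would be kept for rehearsal.
```

Perturbing the low-dimensional embedding avoids re-running the feature extractor for every perturbed copy, which is the source of the computational savings.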
Internal language model estimation (ILME) based language model (LM) fusion has been shown to significantly improve recognition results over conventional shallow fusion, for both intra-domain and cross-domain speech recognition tasks. In this paper, we attempt to apply the ILME method to cross-domain code-switching speech recognition (CSSR) work. Specifically, our curiosity comes from several aspects. First, we are curious about the effectiveness of ILME-based LM fusion on intra-domain and cross-domain CSSR tasks; we verify this without merging the two code-switching domains. More importantly, we train an end-to-end (E2E) speech recognition model by merging two monolingual datasets and observe the efficacy of the proposed ILME-based LM fusion for CSSR. Experimental results on SEAME, from Southeast Asia, and another Mainland China CS dataset demonstrate the effectiveness of the proposed ILME-based LM fusion method.
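ILME-based fusion at decoding time subtracts the estimated internal LM score from the E2E score before adding the external LM score. A minimal rescoring sketch, with illustrative interpolation weights and made-up probabilities (real systems tune the weights per task):

```python
import math

# Sketch of ILME-based LM fusion: score = log P_E2E - lam_ilm * log P_ILM
#                                        + lam_elm * log P_ELM
# The weights and hypothesis probabilities below are illustrative only.

def ilme_score(log_p_e2e, log_p_ilm, log_p_elm, lam_ilm=0.2, lam_elm=0.4):
    return log_p_e2e - lam_ilm * log_p_ilm + lam_elm * log_p_elm

def rescore(hyps):
    """hyps: list of (text, log_p_e2e, log_p_ilm, log_p_elm)."""
    return max(hyps, key=lambda h: ilme_score(h[1], h[2], h[3]))[0]

hyps = [
    ("i want to 吃饭", math.log(0.30), math.log(0.20), math.log(0.25)),
    ("i want two 吃饭", math.log(0.32), math.log(0.30), math.log(0.05)),
]
```

Subtracting the internal LM score removes the bias the E2E model acquired from its training-text distribution, letting the external (target-domain) LM steer cross-domain decoding.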
In this paper, we address the new language-based audio retrieval task proposed in DCASE 2022. We then show that, using this architecture together with a contrastive loss, the model can significantly beat the performance of the baseline model. Finally, in addition to having an extremely low training memory requirement, we are able to use pretrained models without fine-tuning them. We test our approach and show that using a combination of these methods can significantly beat the baseline score.
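The contrastive objective can be sketched as an InfoNCE-style loss over a batch of audio/caption embedding pairs; the temperature value and normalization choices below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch of a contrastive (InfoNCE-style) retrieval loss: matched
# audio/caption pairs in the batch are pulled together, mismatched pairs
# pushed apart.

def info_nce(audio, text, temp=0.07):
    a = audio / np.linalg.norm(audio, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = a @ t.T / temp  # pairwise cosine similarities, scaled
    # numerically stable log-softmax per audio row; i-th text is the positive
    m = logits.max(axis=1, keepdims=True)
    log_soft = logits - (m + np.log(np.exp(logits - m).sum(axis=1,
                                                           keepdims=True)))
    return float(-np.diag(log_soft).mean())
```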
Catastrophic forgetting is a thorny challenge when updating keyword spotting (KWS) models after deployment. This problem becomes even more challenging if the KWS model must be updated with limited memory. To mitigate such issues, we propose a novel diversity-aware incremental learning method named Rainbow Keywords (RK). Specifically, the proposed RK method introduces a diversity-aware sampler that selects a diverse set from historical and incoming keywords by calculating classification uncertainty. As a result, the RK method can incrementally learn new tasks without forgetting prior knowledge. Moreover, the RK method also proposes data augmentation and knowledge distillation loss functions for efficient memory management on edge devices. Experimental results show that the proposed RK method achieves a 4.2% absolute improvement in average accuracy over the best baseline on the Google Speech Commands dataset, with less required memory. The scripts are available on GitHub.
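A diversity-aware sampler of this kind can be sketched as follows: score each example by classification uncertainty (entropy is used here as an illustrative proxy) and keep a per-keyword quota so the stored memory stays diverse under a budget. The `select` helper and its quota rule are hypothetical, not the paper's exact algorithm:

```python
import math

# Sketch of a diversity-aware sampler: rank examples by uncertainty within
# each keyword and keep a fixed per-keyword quota under a global budget.

def entropy(probs):
    """Shannon entropy of a class-probability vector (higher = less sure)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select(examples, budget):
    """examples: list of (id, keyword, class_probs); keep <= budget items."""
    per_kw = {}
    for ex in examples:
        per_kw.setdefault(ex[1], []).append(ex)
    quota = max(1, budget // max(1, len(per_kw)))
    kept = []
    for kw, exs in per_kw.items():
        exs.sort(key=lambda e: entropy(e[2]), reverse=True)  # most uncertain
        kept.extend(exs[:quota])
    return kept[:budget]
```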